Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights

Authors

  • Derrick Nguyen
  • Bernard Widrow
Abstract

A two-layer neural network can be used to approximate any nonlinear function. The behavior of the hidden nodes that allows the network to do this is described. Networks with one input are analyzed first, and the analysis is then extended to networks with multiple inputs. The result of this analysis is used to formulate a method for initialization of the weights of neural networks to reduce training time. Training examples are given, and the learning curves for these examples are shown to illustrate the decrease in necessary training time.

Introduction

Two-layer feedforward neural networks have been proven capable of approximating any arbitrary function [1], given that they have a sufficient number of nodes in their hidden layer. We offer a description of how this works, along with a method of speeding up the training process by choosing the network's initial weights. The relationship between the inputs and the output of a two-layer neural network may be described by Equation (1):

y = Σᵢ₌₀^(H−1) wᵢ · sigmoid(Wᵢ · X + w_bᵢ)    (1)

where y is the network's output, X is the input vector, H is the number of hidden nodes, Wᵢ is the weight vector of the i-th node of the hidden layer, w_bᵢ is the bias weight of the i-th hidden node, and wᵢ is the output-layer weight which connects the i-th hidden unit to the output.

The behavior of hidden nodes in two-layer networks with one input

To illustrate the behavior of the hidden nodes, a two-layer network with one input is trained to approximate a function of one variable d(z). That is, the network is trained to produce d(z) given z as input, using the back-propagation algorithm [2]. The output of the network is given as
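The initialization scheme summarized above can be sketched in code. This is a hedged reconstruction, not the authors' implementation: it assumes inputs scaled to [−1, 1] and uses the scale factor β = 0.7·H^(1/n) commonly associated with Nguyen-Widrow initialization (H hidden nodes, n inputs), rescaling each hidden node's weight vector to magnitude β and drawing its bias uniformly from [−β, β] so that the sigmoids' active regions are spread over the input range. The `forward` function simply evaluates Equation (1).

```python
import math
import random

def nguyen_widrow_init(n_inputs, n_hidden, seed=0):
    """Initialize hidden-layer weights and biases (sketch of the
    Nguyen-Widrow scheme; assumes inputs scaled to [-1, 1])."""
    rng = random.Random(seed)
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)  # scale factor
    W, b = [], []
    for _ in range(n_hidden):
        # Random direction, then rescale the vector to magnitude beta.
        v = [rng.uniform(-0.5, 0.5) for _ in range(n_inputs)]
        norm = math.sqrt(sum(x * x for x in v))
        W.append([beta * x / norm for x in v])
        # Bias spreads each node's active region across the input range.
        b.append(rng.uniform(-beta, beta))
    return W, b

def forward(W, b, w_out, X):
    """Network output per Equation (1): y = sum_i w_i * sigmoid(W_i.X + w_bi)."""
    def sigmoid(a):
        return 1.0 / (1.0 + math.exp(-a))
    return sum(
        wo * sigmoid(sum(wi * xi for wi, xi in zip(row, X)) + bi)
        for row, bi, wo in zip(W, b, w_out)
    )

W, b = nguyen_widrow_init(n_inputs=2, n_hidden=8)
y = forward(W, b, w_out=[0.1] * 8, X=[0.5, -0.5])
```

In practice the output-layer weights `w_out` are still trained by back-propagation; the point of the initialization is only to start the hidden sigmoids in useful positions so fewer training epochs are needed.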


Similar resources

Cystoscopic Image Classification Based on Combining MLP and GA

In the past three decades, the use of intelligent methods in medical diagnostic systems has attracted the attention of many researchers. However, despite the high worldwide prevalence of bladder cancer, no intelligent method has been provided in medical image processing for diagnosing it from cystoscopy images. In this paper, a multilayer neural network was applied to clas...


INTEGRATED ADAPTIVE FUZZY CLUSTERING (IAFC) NEURAL NETWORKS USING FUZZY LEARNING RULES

The proposed IAFC neural networks have both stability and plasticity because they use a control structure similar to that of the ART-1 (Adaptive Resonance Theory) neural network. The unsupervised IAFC neural network is the unsupervised neural network which uses the fuzzy leaky learning rule. This fuzzy leaky learning rule controls the updating amounts by fuzzy membership values. The supervised IAFC ...


Wavelet Neural Network with Random Wavelet Function Parameters

The training algorithm of Wavelet Neural Networks (WNN) is a bottleneck which impacts the accuracy of the final WNN model. Several methods have been proposed for training WNNs. From the perspective of our research, most of these algorithms are iterative and need to adjust all the parameters of the WNN. This paper proposes a one-step learning method which changes the weights between hidden la...


On the convergence speed of artificial neural networks in the solving of linear systems

Artificial neural networks have advantages such as learning, adaptation, fault-tolerance, parallelism and generalization. This paper is a scrutiny of the application of diverse learning methods to the speed of convergence in neural networks. For this aim, first we introduce a perceptron method based on artificial neural networks which has been applied for solving a non-singula...


A Differential Evolution and Spatial Distribution based Local Search for Training Fuzzy Wavelet Neural Network

Abstract   Many parameter-tuning algorithms have been proposed for training Fuzzy Wavelet Neural Networks (FWNNs). Absence of an appropriate structure, convergence to local optima, and low speed of the learning algorithms are deficiencies of FWNNs in previous studies. In this paper, a Memetic Algorithm (MA) is introduced to train FWNNs and address the aforementioned learning deficiencies. Differential Evolution...



Journal title:

Volume   Issue

Pages  -

Publication date 1990